
    Performance-effective operation below Vcc-min

    Continuous circuit miniaturization and increased process variability point to a future with diminishing returns from dynamic voltage scaling. Operation below Vcc-min has recently been proposed as a means to reverse this trend. The goal of this paper is to minimize the performance loss due to reduced cache capacity when operating below Vcc-min. A simple method is proposed: disable faulty blocks at low voltage. The method is based on observations, grounded in probability theory, about the distribution of faults in an array. The key lesson from the probability analysis is that as the number of uniformly distributed random faulty cells in an array increases, the faults increasingly occur in already faulty blocks. The probability analysis is also shown to be useful for gaining insight into the reliability implications of other cache techniques. For one configuration used in this paper, block disabling is shown to have on average 6.6% and up to 29% better performance than a previously proposed scheme for low-voltage cache operation. Furthermore, block disabling is simpler and less costly to implement and does not degrade performance at or above Vcc-min. Finally, it is shown that a victim cache enables higher and more deterministic performance for a block-disabled cache.
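    The clustering claim above (new faults increasingly land in already faulty blocks) can be checked with a small Monte Carlo sketch. The array geometry and trial count below are illustrative assumptions, not parameters from the paper:

```python
import random

def fraction_of_faults_opening_new_blocks(num_blocks, bits_per_block,
                                          num_faults, trials=2000):
    """Estimate what fraction of uniformly random cell faults land in a
    block that had no prior fault, by counting distinct faulty blocks."""
    cells = num_blocks * bits_per_block
    total_distinct = 0
    for _ in range(trials):
        hit_blocks = {random.randrange(cells) // bits_per_block
                      for _ in range(num_faults)}
        total_distinct += len(hit_blocks)
    return total_distinct / trials / num_faults

# With few faults almost every fault disables a fresh block; with many
# faults a growing share land in blocks that are already faulty, so the
# capacity lost to block disabling grows sublinearly in the fault count.
print(fraction_of_faults_opening_new_blocks(1024, 512, 100))
print(fraction_of_faults_opening_new_blocks(1024, 512, 1000))
```

    The second printed fraction is noticeably smaller than the first, which is the effect the probability analysis exploits: disabling whole faulty blocks costs less capacity than a per-fault accounting would suggest.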

    Protecting Prediction Arrays against Faults

    Abstract — Continuous circuit and wire miniaturization increasingly exerts pressure on computer designers to address the issue of reliable operation in the presence of faults. Virtually all previous work on processor reliability addresses problems due to faults in architectural structures, such as the register file or caches. However, faults can also occur in non-architectural resources, such as predictors and replacement bits. Although non-architectural faults do not affect correctness, they can degrade a processor's performance significantly and may therefore be as important to address as architectural faults. This paper quantifies the performance implications of faults in a line predictor, and shows that performance can drop significantly when the line predictor has faulty entries. In particular, a simulation-based worst-case analysis of a high-end processor that experiences faults in 1% of the line-predictor entries revealed an average performance degradation of 8% and up to 26%. As solutions, we point to avoiding bit interleaving as a more fault-tolerant design style for prediction arrays and to a hardware protection scheme based on address remapping. This scheme recovers most of the performance loss when up to 5% of the line-predictor entries are faulty, and when no faults exist it does not degrade performance.
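    The address-remapping idea can be sketched as follows: accesses that would index a known-faulty predictor entry are redirected to a healthy entry, so the predictor keeps learning instead of serving stale or corrupt predictions. This is a minimal illustrative model; all names, the table size, and the modulo-based redirection policy are assumptions, not the paper's actual hardware scheme:

```python
class RemappedLinePredictor:
    """Toy direct-mapped prediction table with address remapping.
    Entries listed in `faulty` are never read or written; their
    accesses are redirected to a healthy entry instead."""

    def __init__(self, num_entries, faulty):
        self.table = [0] * num_entries  # predicted targets (0 = no prediction)
        healthy = [i for i in range(num_entries) if i not in faulty]
        # Map each faulty index to some healthy index (simple modulo pick).
        self.remap = {f: healthy[f % len(healthy)] for f in faulty}

    def _index(self, pc):
        i = pc % len(self.table)
        return self.remap.get(i, i)  # redirect only faulty indices

    def predict(self, pc):
        return self.table[self._index(pc)]

    def update(self, pc, target):
        self.table[self._index(pc)] = target

# Entry 3 is faulty; PCs that map to it share a healthy entry instead,
# trading a little aliasing for not losing those predictions entirely.
p = RemappedLinePredictor(8, faulty={3})
p.update(3, 0x42)
print(p.predict(3))
```

    The trade-off shown here matches the abstract's result qualitatively: remapped PCs may alias with the PCs that legitimately own the healthy entry, but that interference is far cheaper than always mispredicting on a faulty entry.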